Feat/retries on conflict error #74
base: main
Conversation
Also, just so you know, the workflows were broken for forks (that's why they were failing). We've fixed it, but it will now run every workflow twice in this and the other PR. If you want, you can rebase the branches onto main and the errors will go away.
Force-pushed from 27c4a65 to 05b88bf
@jonathan-mayer rebased and included the previous suggestions. Just a couple of things to note:
I'll have a look over it.
I do get that it detects it as a duplicate, but I'm not entirely sure how to avoid it. I think in theory it is avoidable by just having list and get in separate functions. Another way would be to refactor to use the dynamic client again, although I would still like to avoid that.
I think we could refactor the function called in the goroutine out.
@jonathan-mayer I refactored the go func out of the …
I think the current function name should be good for now. Although I would not put an anonymous function in the attemptScan function, but instead just run the attemptScan function in a goroutine so …
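A minimal sketch of that shape, assuming a hypothetical workload type and scanWorkloads caller (attemptScan is the only name taken from this PR, everything else is illustrative):

```go
package scan

import (
	"context"
	"sync"
)

// workload stands in for the project's scalable-resource type; the name is
// only illustrative.
type workload struct{ name string }

// attemptScan does its own WaitGroup bookkeeping, so callers can start it
// directly with `go attemptScan(...)` instead of wrapping it in an anonymous
// function.
func attemptScan(ctx context.Context, wg *sync.WaitGroup, w workload) {
	defer wg.Done()
	// scan / downscale logic for w would go here
}

func scanWorkloads(ctx context.Context, workloads []workload) {
	var wg sync.WaitGroup
	for _, w := range workloads {
		wg.Add(1)
		go attemptScan(ctx, &wg, w) // no anonymous wrapper needed
	}
	wg.Wait()
}
```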
Don't worry Jonathan, absolutely no rush, take your time.
I think we should change up the GetResource funcs again. They should only handle listing, and then every scalableResource should have a re-get function which re-gets itself from Kubernetes. It could look something like this:

func (c *cronJob) Reget(clientsets *Clientsets, ctx context.Context) {
	// client-go's Get takes the context first, then the name
	updated, err := clientsets.Kubernetes.BatchV1().CronJobs(c.Namespace).Get(ctx, c.Name, metav1.GetOptions{})
	if err != nil {
		// TODO handle error
	}
	*c = cronJob{updated} // assuming cronJob wraps the *batchv1.CronJob
}
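To show where such a Reget would plug into the retry feature, here is a self-contained sketch of a bounded conflict retry. The Workload interface, the Update method, and scaleWithRetries are assumptions (and Reget is assumed to return its error rather than swallowing it); apierrors.IsConflict is the standard apimachinery check for HTTP 409:

```go
package scaling

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// Clientsets is a placeholder for the clientset bundle referenced above.
type Clientsets struct{}

// Workload is a stand-in interface: Reget mirrors the suggestion above,
// Update is a hypothetical method that performs the scaling update and can
// fail with HTTP 409 Conflict.
type Workload interface {
	Update(clientsets *Clientsets, ctx context.Context) error
	Reget(clientsets *Clientsets, ctx context.Context) error
}

// scaleWithRetries retries a conflicting update, re-getting the object first
// so the next attempt works on the latest resourceVersion. maxRetries would
// come from the proposed --max-retries-on-conflict flag.
func scaleWithRetries(ctx context.Context, clientsets *Clientsets, w Workload, maxRetries int) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if attempt > 0 {
			// refresh the object from the API server before retrying
			if regetErr := w.Reget(clientsets, ctx); regetErr != nil {
				return regetErr
			}
		}
		err = w.Update(clientsets, ctx)
		if err == nil || !apierrors.IsConflict(err) {
			return err
		}
	}
	return err
}
```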
Implemented. The linter still complains about two things:
I mean the first one could be true; for the second one, it seems legitimate to me to return an interface type in that case.
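In case it helps frame that second point, here is a toy example (all names illustrative, not from this repo) of the pattern being defended: the concrete type is only known at runtime, so returning the interface is the natural signature:

```go
package workloads

// Workload is an illustrative interface; the real project presumably defines
// its own scalable-resource abstraction.
type Workload interface {
	GetName() string
}

type cronJob struct{ name string }

func (c *cronJob) GetName() string { return c.name }

type deployment struct{ name string }

func (d *deployment) GetName() string { return d.name }

// newWorkload returns the interface on purpose: which concrete type is built
// depends on the resource kind at runtime, the classic case where returning
// an interface instead of a struct is reasonable.
func newWorkload(kind, name string) Workload {
	if kind == "CronJob" {
		return &cronJob{name: name}
	}
	return &deployment{name: name}
}
```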
Looked at main and client-go.
Refactored the way you suggested; I'm still getting the "duplicate" suggestion from the linter. I think we can't do much about that.
I think this should be good to go
Oh actually, add the nolint directives for the dupl linter suggestion, then we can merge.
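For reference, a minimal sketch of how such a directive is placed, assuming golangci-lint; the function and its body are only placeholders for one of the near-identical listing helpers:

```go
package workloads

import "strings"

// golangci-lint reads the directive on the line directly above the flagged
// declaration; everything after the second `//` documents why the
// duplication is accepted.
//
//nolint:dupl // listing logic is intentionally similar for every resource type
func normalizeNames(names []string) []string {
	out := make([]string, 0, len(names))
	for _, n := range names {
		out = append(out, strings.ToLower(n))
	}
	return out
}
```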
Done.
Motivation
Give the user the ability to choose how to handle HTTP 409 conflict errors. Such conflicts typically occur when another entity (such as an HPA, a CI/CD pipeline, or manual intervention) modifies a resource just before KubeDownscaler processes it.
See #68 or caas-team/py-kube-downscaler#111
Changes
- Added the --max-retries-on-conflict argument, like in Py-Kube-Downscaler (see the sketch after this list).
- Workloads are now retrieved with namespaced list calls (similar to kubectl get deploy -n default); the old GetWorkload() was renamed to GetWorkloads() to reflect the changes.
- Scaling updates that fail with an HTTP 409 conflict are retried up to --max-retries-on-conflict times.
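As a point of reference for the retry semantics (not necessarily how this PR wires it, since the conversation above sketches a manual Reget-based loop), client-go also ships a generic conflict-retry helper. A self-contained sketch of how the new flag could drive it; the default value, help text, and updateWorkload are assumptions:

```go
package main

import (
	"context"
	"flag"
	"fmt"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/retry"
)

// updateWorkload is a placeholder for the actual re-get + scaling update.
func updateWorkload(ctx context.Context) error { return nil }

func main() {
	// Flag name from this PR; the default value is an assumption.
	maxRetries := flag.Int("max-retries-on-conflict", 0,
		"how often to retry a scaling update that fails with HTTP 409 Conflict")
	flag.Parse()

	// Steps caps how often the callback runs, so the first attempt plus
	// maxRetries retries roughly matches the flag's semantics.
	backoff := wait.Backoff{
		Steps:    *maxRetries + 1,
		Duration: retry.DefaultRetry.Duration,
		Factor:   retry.DefaultRetry.Factor,
		Jitter:   retry.DefaultRetry.Jitter,
	}

	err := retry.RetryOnConflict(backoff, func() error {
		// the callback is re-run only while it returns a conflict error
		return updateWorkload(context.Background())
	})
	if err != nil {
		fmt.Println("scaling failed:", err)
	}
}
```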
Tests done
TODO